
    Talk 2: Recurrent neural nets and differentiable memory mechanism

    This past year, RNNs have seen a lot of attention as powerful models that are able to decode sequences from signals. The key component of such methods is the use of a recurrent neural network architecture that is trained end-to-end to optimize the probability of the output sequence given those signals. In this talk, I'll define the architecture and review some recent successes in my group on machine translation, image understanding, and beyond. In the second part of the talk, I will introduce a new paradigm, differentiable memory, which has enabled learning programs (e.g., the planar Traveling Salesman Problem) from training instances via a powerful extension of RNNs with memory. This effectively turns a machine learning model into a "differentiable computer". I will conclude the talk by giving a few examples (e.g., AlphaGo) of how these recent Machine Learning advances have been the main catalyst in Artificial Intelligence in recent years.
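    The second part of the talk refers to extending an RNN with a memory it can read in a differentiable way. Below is a minimal numpy sketch of that core idea, a content-based soft read over an external memory matrix; the shapes, the dot-product scoring, and the names (memory_read, softmax) are illustrative assumptions, not the exact mechanism presented in the talk.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def memory_read(memory, query):
        """Content-based soft read: weight every memory slot by its similarity
        to the query, then return the weighted sum of the slots."""
        scores = memory @ query           # (slots,) similarity of each slot to the query
        weights = softmax(scores)         # soft, differentiable addressing weights
        return weights @ memory, weights  # (dim,) read vector plus the weights themselves

    rng = np.random.default_rng(0)
    M = rng.normal(size=(6, 4))           # 6 memory slots, each 4-dimensional
    q = rng.normal(size=4)                # query emitted by the controller RNN
    read, w = memory_read(M, q)
    print(read.shape, w.round(2))         # (4,) read vector; weights sum to 1

    Because the read is a smooth function of the query and the memory, gradients flow through it, which is what makes the "differentiable computer" view possible.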

    Pooling-Invariant Image Feature Learning

    Unsupervised dictionary learning has been a key component in state-of-the-art computer vision recognition architectures. While highly effective methods exist for patch-based dictionary learning, these methods may learn redundant features after the pooling stage in a given early vision architecture. In this paper, we offer a novel dictionary learning scheme to efficiently take into account the invariance of learned features after the spatial pooling stage. The algorithm is built on simple clustering, and thus enjoys efficiency and scalability. We discuss the underlying mechanism that justifies the use of clustering algorithms, and empirically show that the algorithm finds better dictionaries than patch-based methods with the same dictionary size.
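    As a rough illustration of the pipeline discussed above (dictionary learning by clustering, then encoding and spatial pooling), here is a minimal Python sketch; the patch size, dictionary size, plain k-means, rectified-linear encoding, and 2x2 pooling grid are illustrative assumptions rather than the paper's exact pooling-aware selection scheme.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(5000, 36))        # e.g. 5000 whitened 6x6 patches

    # Dictionary = k-means centroids of the patches (simple, efficient, scalable).
    K = 64
    dictionary = KMeans(n_clusters=K, n_init=4, random_state=0).fit(patches).cluster_centers_

    def encode_and_pool(patch_grid, dictionary):
        """Rectified-linear encoding of a grid of patches against the dictionary,
        followed by max pooling over a 2x2 grid of spatial cells."""
        rows, cols, dim = patch_grid.shape
        acts = np.maximum(patch_grid.reshape(-1, dim) @ dictionary.T, 0.0)
        acts = acts.reshape(rows, cols, -1)
        half_r, half_c = rows // 2, cols // 2
        pooled = [acts[r0:r0 + half_r, c0:c0 + half_c].max(axis=(0, 1))
                  for r0 in (0, half_r) for c0 in (0, half_c)]
        return np.concatenate(pooled)            # (4 * K,) pooled image feature

    grid = rng.normal(size=(12, 12, 36))         # toy grid of patches from one image
    print(encode_and_pool(grid, dictionary).shape)   # (256,)

    The motivation stated above is that redundancy among learned features can appear only after pooling, which a purely patch-level criterion cannot see; the sketch makes the post-pooling representation explicit so that such redundancy could be measured.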

    Sequence to Sequence Learning with Neural Networks

    Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
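    A minimal PyTorch sketch of the encoder-decoder setup described above, including the source-reversal trick, is given below; the single-layer LSTMs, small dimensions, and toy batch are illustrative assumptions and far smaller than the models used in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    SRC_VOCAB, TGT_VOCAB, EMB, HID = 100, 120, 32, 64

    class Seq2Seq(nn.Module):
        def __init__(self):
            super().__init__()
            self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
            self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
            self.encoder = nn.LSTM(EMB, HID, batch_first=True)
            self.decoder = nn.LSTM(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, TGT_VOCAB)

        def forward(self, src, tgt_in):
            # Reverse each source sentence (but not the target), the trick the
            # abstract reports as markedly improving performance.
            src_rev = torch.flip(src, dims=[1])
            _, state = self.encoder(self.src_emb(src_rev))   # fixed-size summary (h, c)
            dec_out, _ = self.decoder(self.tgt_emb(tgt_in), state)
            return self.out(dec_out)                         # logits over the target vocabulary

    model = Seq2Seq()
    src = torch.randint(0, SRC_VOCAB, (8, 11))               # toy source batch
    tgt = torch.randint(0, TGT_VOCAB, (8, 13))               # toy target batch (teacher forcing)
    logits = model(src, tgt[:, :-1])
    loss = F.cross_entropy(logits.reshape(-1, TGT_VOCAB), tgt[:, 1:].reshape(-1))
    loss.backward()    # end-to-end training: maximize log p(target sequence | source sequence)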